Maximum Margin Linear Classifiers in Unions of Subspaces
Authors
Abstract
In this work, we propose a framework, dubbed Union-of-Subspaces SVM (US-SVM), to learn linear classifiers as sparse codes over a learned dictionary. In contrast to discriminative sparse coding with a learned dictionary, it is not the data but the classifiers that are sparsely encoded. Experiments in visual categorization demonstrate that, at training time, the joint learning of the classifiers and of the over-complete dictionary allows the discovery and sharing of mid-level attributes. The resulting classifiers further have a very compact representation in the learned dictionaries, offering substantial performance advantages over standard SVM classifiers for a fixed representation sparsity. This high degree of sparsity of our classifier also provides computational gains, especially in the presence of numerous classes. In addition, the learned atoms can help identify several intra-class modalities.
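The core idea of the abstract — representing a linear classifier w as a sparse combination w ≈ Dᵀα of dictionary atoms, rather than sparsely encoding the data — can be illustrated with a generic sparse-coding step. The sketch below uses plain Orthogonal Matching Pursuit in NumPy as a stand-in; the dictionary, dimensions, and the `sparse_code_classifier` helper are illustrative assumptions and do not reproduce the paper's joint learning of the dictionary and the classifiers.

```python
import numpy as np

def sparse_code_classifier(w, D, n_nonzero=3):
    """Encode a classifier w (shape (d,)) as a sparse combination of
    dictionary atoms D (shape (k, d)), so that w ~= D.T @ alpha,
    via a simple Orthogonal Matching Pursuit loop."""
    residual = w.copy()
    support = []
    alpha = np.zeros(D.shape[0])
    for _ in range(n_nonzero):
        # greedily pick the atom most correlated with the residual
        j = int(np.argmax(np.abs(D @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of the coefficients on the current support
        coeffs, *_ = np.linalg.lstsq(D[support].T, w, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coeffs
        residual = w - D[support].T @ coeffs
    return alpha

# toy example: a "classifier" that lies exactly in the span of two atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 50))   # 10 atoms in R^50 (illustrative sizes)
w = 2.0 * D[3] - 0.5 * D[7]         # classifier built from atoms 3 and 7
alpha = sparse_code_classifier(w, D, n_nonzero=2)
reconstruction_error = np.linalg.norm(D.T @ alpha - w)
print(np.nonzero(alpha)[0], reconstruction_error)
```

With only two active coefficients, evaluating this classifier reduces to two precomputed atom responses per example instead of a full d-dimensional dot product — the source of the computational gains the abstract mentions when many classes share one dictionary.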
Similar resources
Online Learning of Maximum p-Norm Margin Classifiers with Bias
We propose a new online learning algorithm which provably approximates maximum margin classifiers with bias, where the margin is defined in terms of p-norm distance. Although learning of linear classifiers with bias can be reduced to learning of those without bias, the known reduction might lose the margin and slow down the convergence of online learning algorithms. Our algorithm, unlike previo...
Discriminative Learning via Semidefinite Probabilistic Models
Discriminative linear models are a popular tool in machine learning. These can be generally divided into two types: linear classifiers, such as support vector machines (SVMs), which are well studied and provide state-of-the-art results, and probabilistic models such as logistic regression. One shortcoming of SVMs is that their output (known as the "margin") is not calibrated, so that it is diffi...
A Systematic Cross-Comparison of Sequence Classifiers
In the CoNLL 2003 NER shared task, more than two thirds of the submitted systems used a feature-rich representation of the task. Most of them used the maximum entropy principle to combine the features together. Others used large margin linear classifiers, such as SVM and RRM. In this paper, we compare several common classifiers under exactly the same conditions, demonstrating that the ranking o...
How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?
This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linea...
Online Learning of Approximate Maximum Margin Classifiers with Biases
We consider online learning of linear classifiers which approximately maximize the 2-norm margin. Given a linearly separable sequence of instances, typical online learning algorithms such as Perceptron and its variants, map them into an augmented space with an extra dimension, so that those instances are separated by a linear classifier without a constant bias term. However, this mapping might ...
Publication date: 2016